471 research outputs found
Reducing the Need for Backpropagation and Discovering Better Optima With Explicit Optimizations of Neural Networks
Iterative differential approximation methods that rely upon backpropagation
have enabled the optimization of neural networks; however, at present, they
remain computationally expensive, especially when training models at scale. In
this paper, we propose a computationally efficient alternative for optimizing
neural networks that can both reduce the costs of scaling neural networks and
provide high-efficiency optimizations for low-resource applications. We derive
an explicit solution to a simple feed-forward language model (LM) by
mathematically analyzing its gradients. This solution generalizes from
single-layer LMs to the class of all single-layer feed-forward
softmax-activated neural models trained on positive-valued features, as demonstrated by extending this solution to MNIST digit classification. For both LM and digit classifiers, we find computationally that explicit solutions perform near-optimally in experiments showing that 1)
iterative optimization only marginally improves the explicit solution
parameters and 2) randomly initialized parameters iteratively optimize towards
the explicit solution. We also preliminarily apply the explicit solution
locally by layer in multi-layer networks and discuss how the solution's
computational savings increase with model complexity -- for both single- and
multi-layer applications of the explicit solution, we emphasize that the optima
achieved cannot be reached by backpropagation alone, i.e., better optima appear
discoverable only after explicit solutions are applied. Finally, we discuss the
solution's computational savings alongside its impact on model interpretability
and suggest future directions for the derivation of explicit solutions to
complex- and multi-layer architectures.
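The abstract does not reproduce the derivation itself, so the sketch below stands in with one known closed form for this model class: multinomial naive-Bayes-style log-count weights for a single-layer softmax classifier over positive-valued features, followed by a short gradient refinement of the kind the experiments describe. The function names, smoothing constant, and toy data are illustrative assumptions, not the paper's method.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def explicit_fit(X, y, n_classes, alpha=1.0):
    """Closed-form weights: log of smoothed class-conditional feature mass.
    A naive-Bayes-style stand-in, not the paper's actual derivation."""
    W = np.zeros((X.shape[1], n_classes))
    for c in range(n_classes):
        mass = X[y == c].sum(axis=0) + alpha          # smoothed feature mass
        W[:, c] = np.log(mass / mass.sum())
    b = np.log(np.bincount(y, minlength=n_classes) / len(y))  # class priors
    return W, b

def refine(X, y, W, b, lr=0.1, steps=50):
    """Warm-started gradient refinement of the explicit solution."""
    Y = np.eye(W.shape[1])[y]                          # one-hot targets
    for _ in range(steps):
        P = softmax(X @ W + b)
        W -= lr * (X.T @ (P - Y) / len(X))             # cross-entropy gradient
        b -= lr * (P - Y).mean(axis=0)
    return W, b

# Toy usage: positive-valued features, 3 classes.
rng = np.random.default_rng(0)
X = rng.random((300, 20))
y = rng.integers(0, 3, 300)
W, b = explicit_fit(X, y, n_classes=3)
W, b = refine(X, y, W, b)  # per the abstract, typically a marginal change
```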
Explicit Foundation Model Optimization with Self-Attentive Feed-Forward Neural Units
Iterative approximation methods using backpropagation enable the optimization
of neural networks, but they remain computationally expensive, especially when
used at scale. This paper presents an efficient alternative for optimizing
neural networks that reduces the costs of scaling neural networks and provides
high-efficiency optimizations for low-resource applications. We discuss a general result about feed-forward neural networks and then extend this solution to compositional (multi-layer) networks, which we apply to a simplified transformer block containing feed-forward and self-attention layers. These
models are used to train highly-specified and complex multi-layer neural
architectures that we refer to as self-attentive feed-forward unit (SAFFU)
layers, which we use to develop a transformer that appears to generalize well
over small, cognitively feasible volumes of data. Testing demonstrates that explicit solutions outperform models optimized by backpropagation alone.
Moreover, further application of backpropagation after explicit solutions leads to better optima from smaller scales of data; that is, explicit-solution warm starts enable training effective models from much less data. We then carry out
ablation experiments training a roadmap of about 250 transformer models over
1 million tokens to determine ideal settings. We find that multiple different architectural variants produce highly performant models, and we discover from this ablation that some of the best are not the most heavily parameterized. This appears to indicate that well-generalized models can be reached using less data via explicit solutions, and that architectural exploration using explicit solutions pays dividends in guiding the search for efficient variants with fewer parameters that could be incorporated into low-resource hardware where AI might be embodied.
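The abstract does not specify the SAFFU wiring, so the following is only a plausible minimal sketch of a simplified transformer block of the kind described: single-head self-attention feeding a softmax-activated feed-forward layer (the softmax output keeps the features positive-valued, the regime where the explicit solutions above apply). The shapes, initialization, and absence of masking are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def saffu_block(X, Wq, Wk, Wv, Wf, bf):
    """One simplified block: single-head self-attention feeding a
    softmax-activated feed-forward layer. A hypothetical structure;
    the abstract does not detail the exact SAFFU architecture."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))   # (T, T) attention weights
    H = A @ V                           # attended representation
    return softmax(H @ Wf + bf)         # positive-valued output features

# Toy usage: a sequence of 8 tokens with 16-dim embeddings.
rng = np.random.default_rng(0)
T, D = 8, 16
X = rng.standard_normal((T, D))
Wq, Wk, Wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
Wf, bf = rng.standard_normal((D, D)) * 0.1, np.zeros(D)
Y = saffu_block(X, Wq, Wk, Wv, Wf, bf)  # (T, D), each row sums to 1
```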
Using data-driven sublanguage pattern mining to induce knowledge models: application in medical image reports knowledge representation
Background: The use of knowledge models facilitates information retrieval and knowledge base development, and therefore supports new knowledge discovery that ultimately enables decision support applications. Most existing works have employed machine learning techniques to construct a knowledge base. However, they often suffer from low precision in extracting entities and relationships. In this paper, we describe a data-driven sublanguage pattern mining method that can be used to create a knowledge model. We combine natural language processing (NLP) and semantic network analysis in our model generation pipeline.
Methods: As a use case of our pipeline, we utilized data from an open-source imaging case repository, Radiopaedia.org, to generate a knowledge model that represents the contents of medical imaging reports. We extracted entities and relationships using the Stanford part-of-speech parser and the “Subject:Relationship:Object” syntactic data schema. The identified noun phrases were tagged with Unified Medical Language System (UMLS) semantic types. An evaluation was performed on a dataset comprising 83 image notes from four data sources.
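As a rough illustration of the “Subject:Relationship:Object” extraction step, the sketch below uses spaCy's dependency parser as a stand-in for the Stanford parser; the dependency labels and matching rules here are a simplification, not the paper's exact schema.

```python
import spacy

# Stand-in for the paper's Stanford-parser pipeline; requires the
# en_core_web_sm model to be installed.
nlp = spacy.load("en_core_web_sm")

def extract_triples(text):
    """Yield (subject, relation, object) triples from a dependency parse;
    a simplified sketch of the Subject:Relationship:Object schema."""
    doc = nlp(text)
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [w for w in token.lefts
                        if w.dep_ in ("nsubj", "nsubjpass")]
            objects = [w for w in token.rights
                       if w.dep_ in ("dobj", "obj", "attr")]
            for s in subjects:
                for o in objects:
                    yield (s.text, token.lemma_, o.text)

print(list(extract_triples("The lesion shows a heterogeneous enhancement.")))
# [('lesion', 'show', 'enhancement')]
```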
Results: A semantic type network was built based on the co-occurrence of 135 UMLS semantic types in 23,410 medical image reports. By regrouping the semantic types and generalizing the semantic network, we created a knowledge model that contains 14 semantic categories. Our knowledge model covered 98% of the content in the evaluation corpus and revealed 97% of the relationships. Machine annotation achieved a precision of 87%, recall of 79%, and F-score of 82%.
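The network-construction step can be pictured as counting co-occurrences of semantic types within individual reports; a minimal sketch with networkx follows, using a few hypothetical semantic-type sets rather than the paper's 23,410 reports.

```python
import networkx as nx
from itertools import combinations

def build_semantic_network(reports):
    """Build a weighted co-occurrence network over UMLS semantic types,
    where each report is the set of semantic types found in it. A sketch
    of the network-construction step, not the paper's exact procedure."""
    G = nx.Graph()
    for types in reports:
        for a, b in combinations(sorted(set(types)), 2):
            w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
            G.add_edge(a, b, weight=w + 1)   # increment co-occurrence count
    return G

# Toy usage with hypothetical semantic-type-tagged reports.
reports = [
    {"Body Part", "Finding", "Diagnostic Procedure"},
    {"Body Part", "Finding"},
    {"Finding", "Disease or Syndrome"},
]
G = build_semantic_network(reports)
print(G["Body Part"]["Finding"])  # {'weight': 2}
```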
Conclusion: The results indicated that our pipeline was able to produce a comprehensive content-based knowledge model that could represent context from various sources in the same domain.
- …